NoteGPT_Your API is not an MCP summary.txt
### Summary

David from Neon, a serverless Postgres provider, presents an insightful discussion of MCP servers: what they are, common pitfalls in building them, and best practices for effective implementation. MCP, a protocol developed by Anthropic, allows large language models (LLMs) to interface with real-world applications and services by providing structured context and interaction methods. The protocol is rapidly gaining adoption, with major players like OpenAI, Claude, and Google Gemini embracing it.

David explains that MCP servers expose three key components to LLMs: tools (actions the LLM can perform), resources, and prompts. Tools are the most critical, representing executable actions such as creating a Postgres database. Using a demo, he shows how an LLM can ask Neon's MCP server to provision a real database and build a functional app.

Although MCP servers can be built from scratch (Neon's server is about 500 lines of code), many companies opt to autogenerate them from existing OpenAPI schemas. While fast and convenient, this approach has significant drawbacks. APIs tend to have a large number of endpoints, which confuse LLMs if exposed wholesale as tools. LLMs perform better with a smaller, curated set of well-described tools. API descriptions are also often written for humans and need to be rewritten with LLMs in mind: clear, direct, and enriched with examples.

David emphasizes that APIs were designed for low-level resource management, which suits developers but not LLMs. LLMs think in terms of goals and tasks rather than granular resource control. MCP servers should therefore expose task-oriented, purpose-built tools rather than raw API endpoints. For example, Neon does not just expose a generic "run SQL" tool but provides dedicated tools for preparing and completing database migrations, helping LLMs test and validate changes safely.

In conclusion, David recommends a hybrid strategy for building MCP servers: start with autogenerated tools but prune them heavily, rewrite tool descriptions for clarity and effectiveness, add specialized tools that simplify complex workflows, and implement thorough tests (evals) to ensure LLMs use the server correctly. He invites further discussion with the community about MCP servers, tests, authentication, and databases.

### Highlights

- 🚀 MCP is a new protocol enabling LLMs to interact with real-world apps by exposing tools, resources, and prompts.
- 🛠️ Tools are the core concept of MCP servers, representing actions LLMs can perform, such as creating a Postgres database (see the sketch after this list).
- 🔥 Autogenerating MCP servers from OpenAPI schemas is easy but leads to poor LLM performance due to tool overload and unclear descriptions.
- 🎯 LLMs require a curated, goal-oriented set of tools rather than a massive list of low-level API endpoints.
- 📚 Descriptions for MCP tools need to be written with LLMs in mind, with clarity and examples, unlike traditional API docs.
- 🧪 Implementing tests (evals) is critical to verify that LLMs call the right tools in the right way and to iteratively improve tool design.
- 💡 Purpose-built tools, such as those for database migrations, enhance usability and safety beyond what generic API endpoints provide.
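To make the tool concept concrete, the sketch below registers a single, curated tool with the TypeScript MCP SDK (`@modelcontextprotocol/sdk`). The `create_database` tool name, its description, and the `createNeonDatabase` stub are illustrative assumptions made for this summary, not Neon's actual implementation.

```typescript
// Minimal MCP server exposing one curated, well-described tool (illustrative sketch).
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical stand-in for a real provisioning call to a Postgres provider.
async function createNeonDatabase(name: string): Promise<string> {
  return `postgres://user:pass@example-host/${name}`;
}

const server = new McpServer({ name: "example-postgres-mcp", version: "0.1.0" });

server.tool(
  "create_database",
  // Description written for the LLM: direct, unambiguous, and example-rich.
  "Create a new Postgres database and return its connection string. " +
    "Use this when the user asks for a new database or a place to store app data. " +
    'Example call: create_database({ "name": "todo-app" })',
  { name: z.string().describe("Database name, e.g. 'todo-app'") },
  async ({ name }) => {
    const connectionString = await createNeonDatabase(name);
    return { content: [{ type: "text", text: `Database created: ${connectionString}` }] };
  }
);

// Expose the server over stdio so an MCP client (an LLM agent) can discover and call the tool.
await server.connect(new StdioServerTransport());
```

Even in this tiny example, the description does most of the work: it tells the model when to call the tool and shows a sample invocation, which is exactly what the talk argues autogenerated OpenAPI descriptions usually lack.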
### Key Insights

- ⚙️ **MCP's Role in Bridging LLMs and Applications:** MCP servers are a key innovation that standardizes how LLMs consume external application functionality. Unlike traditional APIs designed for developers, MCP focuses on contextualizing and enabling LLMs to perform meaningful tasks, which is essential as LLMs become integrated into a variety of software ecosystems. This shift requires rethinking API design from the ground up.
- 🤯 **The Danger of Tool Overload:** Simply exposing all API endpoints as MCP tools leads to cognitive overload for LLMs and degrades performance. Despite advances in token limits and model size, LLMs struggle with large toolsets because they need concise context and clear decision boundaries. This highlights the importance of minimalism and curation in MCP server design, mirroring UX principles in traditional software.
- ✍️ **Rewriting API Descriptions for LLMs:** Effective MCP tool descriptions differ from typical API documentation. They must be explicit, unambiguous, and example-rich to guide LLMs' decisions. This requires a paradigm shift in writing and documenting APIs, emphasizing communication with AI agents rather than humans. It also opens opportunities for tooling and frameworks specialized in LLM-friendly API authoring.
- 🔄 **Hybrid Approach to MCP Server Development:** Autogeneration from OpenAPI can serve as a starting point but needs substantial manual curation. This hybrid method balances developer productivity with quality, allowing rapid prototyping followed by refinement. It suggests MCP server tooling should support incremental editing and evaluation workflows rather than purely automated generation.
- 🎯 **Task-Oriented Design vs. Resource Management:** LLMs act more like goal-driven agents than low-level automation scripts, so MCP tools should encapsulate higher-level tasks rather than expose raw CRUD operations. For example, a database migration is better handled with specialized prepare/commit tools than with a generic SQL executor (see the sketch below). This encourages API designers to create LLM-centric abstractions that improve reliability and usability.
- 🧪 **The Importance of Eval Testing:** Because LLM outputs are non-deterministic, continuous testing (evals) is essential to verify that tools are invoked correctly and that descriptions effectively guide LLMs. This parallels software testing principles applied to AI behavior, highlighting the need for new QA methodologies tailored to AI-driven integrations.
- 🔐 **Beyond Basic MCP Server Features:** The talk hints at further development areas such as authentication, security, and complex workflows that are crucial for production MCP servers. These aspects are critical for real-world adoption but require careful design to maintain usability and safety when interacting with powerful LLM agents.

David's talk provides a comprehensive foundation for anyone looking to build MCP servers. It stresses the importance of thoughtful design, testing, and iteration, helping companies avoid common pitfalls and leverage MCP's potential to integrate LLMs effectively with their products and services.
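As a rough illustration of the task-oriented design described above, the sketch below pairs a prepare step with a complete step instead of exposing a raw "run SQL" tool. The tool names, the in-memory `pendingMigrations` store, and the `applySqlTo` helper are assumptions made for this sketch; the real Neon server's tool names and safety mechanics may differ.

```typescript
// Task-oriented migration tools instead of a generic SQL executor (illustrative sketch).
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import { randomUUID } from "node:crypto";

// Hypothetical helper: applies SQL to an isolated test branch or to the main database.
async function applySqlTo(target: "test-branch" | "main", sql: string): Promise<void> {
  console.log(`[${target}] applying:\n${sql}`);
}

const pendingMigrations = new Map<string, string>();
const server = new McpServer({ name: "example-migrations-mcp", version: "0.1.0" });

server.tool(
  "prepare_database_migration",
  "Apply a schema migration to a temporary test branch so it can be validated safely. " +
    "Returns a migration_id. After validating, call complete_database_migration to apply it to the main database.",
  { sql: z.string().describe("DDL/DML statements for the migration") },
  async ({ sql }) => {
    const migrationId = randomUUID();
    await applySqlTo("test-branch", sql);
    pendingMigrations.set(migrationId, sql);
    return { content: [{ type: "text", text: `Prepared migration ${migrationId} on a test branch.` }] };
  }
);

server.tool(
  "complete_database_migration",
  "Apply a previously prepared and validated migration to the main database.",
  { migration_id: z.string().describe("Value returned by prepare_database_migration") },
  async ({ migration_id }) => {
    const sql = pendingMigrations.get(migration_id);
    if (!sql) {
      return { content: [{ type: "text", text: `Unknown migration_id: ${migration_id}` }], isError: true };
    }
    await applySqlTo("main", sql);
    pendingMigrations.delete(migration_id);
    return { content: [{ type: "text", text: `Migration ${migration_id} applied to the main database.` }] };
  }
);

await server.connect(new StdioServerTransport());
```

The safe workflow (test on a branch, then promote) lives in the tool pair and its descriptions, so the LLM does not have to reconstruct that procedure from low-level endpoints.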

MCP directory API

We provide all the information about MCP servers via our MCP API.

```
curl -X GET 'https://glama.ai/api/mcp/v1/servers/Bichev/coinbase-chat-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.